Fix setup.sh errors for Claude Code registration #9

alpiua wants to merge 63 commits into anthroos:main
Conversation
Replace leftover ~/.claude-memory/ references with ~/.openexp/ so all OpenExp data lives under a single self-contained directory. Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
HIGH:
- H1: Add Qdrant API key auth support (config, direct_search, setup.sh)
- H2: Add STDIO-only security note to MCP server
- H3: Fix JSONL append race on macOS with mkdir-based locking

MEDIUM:
- M1: Add 50MB file size limits and streaming reads for JSONL files
- M2: Add fcntl.flock to Q-cache load_and_merge to prevent corruption
- M3: Auto-compact watermark when processed_obs exceeds 10K entries
- M4: Add input length validation to CLI (query 2K chars, 100 memory IDs)
- M5: Explicit chmod 700 on temp directory in session-start hook
- M6: Sanitize JSON parse error messages in MCP server
- M7: Run Qdrant Docker container as non-root (--user 1000:1000)

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Q-update: EMA → additive (Q = clamp(Q + α*r, floor, ceiling))
- q_init: 0.5 → 0.0 (memories earn value from zero)
- q_ceiling: 1.0 added
- Outcome resolver: CRM CSV transitions → memory rewards
- client_id tagging on memories
- resolve CLI command
- session-end hook with retrieval reward loop
- 73/73 tests pass

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
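The additive update rule in the commit above can be sketched in a few lines. Only q_init = 0.0 and q_ceiling = 1.0 are stated in the message; `alpha`, the floor value, and the function name are illustrative assumptions, not the actual openexp code:

```python
# Hedged sketch of the additive Q-update: Q = clamp(Q + alpha * r, floor, ceiling).
# alpha=0.1 and floor=-1.0 are assumptions chosen for illustration.
def update_q(q: float, reward: float, alpha: float = 0.1,
             floor: float = -1.0, ceiling: float = 1.0) -> float:
    """Clamp the additively updated Q-value into [floor, ceiling]."""
    return max(floor, min(ceiling, q + alpha * reward))

# New memories start at q_init = 0.0 and earn value from rewards.
q = update_q(0.0, reward=1.0)   # one positive reward
```

Unlike the old EMA form, repeated identical rewards keep moving Q until it hits the ceiling, so long-lived useful memories accumulate value instead of converging to the reward mean.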
CRITICAL — Q-learning was broken:
- All hardcoded q_init 0.5 → DEFAULT_Q_CONFIG["q_init"] (0.0)
- Fallback values in search, hybrid, cli, mcp → 0.0
- New memories now correctly start at zero and earn value

SECURITY:
- session-end.sh: string interpolation → env vars (shell injection)
- Resolver loading: allowlist openexp.resolvers.* prefix
- Enrichment prompt: XML delimiters + injection guard
- Error responses: generic messages, details only in logs
- lifecycle.py: pass QDRANT_API_KEY to client

CLEANUP:
- Unused imports: math, tempfile, Set, ScrollRequest
- fastembed added to requirements.txt
- pydantic removed (transitive dep only)

73/73 tests pass

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
# Conflicts:
#   openexp/hooks/session-end.sh
#   openexp/ingest/__init__.py
#   openexp/outcome.py
…os#3) Ensures Claude Code always follows search-before/add-after pattern when working in the openexp directory. Includes Q-learning params (do not change), dual-repo workflow, and architecture overview. Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…os#6) Ensures Claude Code always follows search-before/add-after pattern when working in the openexp directory. Includes Q-learning params (do not change), dual-repo workflow, and architecture overview. Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
For prompts >30 chars, inject a reminder to call search_memory before starting the task. Hooks do auto-recall, but targeted manual search catches context the auto-recall misses. Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…throos#5) Remove the >30 chars check — the reminder to call search_memory should appear on every non-trivial prompt, not only long ones. This ensures the Q-learning loop gets manual searches even for medium-length prompts. Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…ss (anthroos#6)

- Add license, Python, arXiv, and Claude Code badges to README
- Add "Why OpenExp?" comparison table (vs Mem0, Zep/Graphiti, LangMem)
- Add TIP callout box in Quick Start section
- Add Citation/BibTeX section with arXiv:2603.07360
- Add CONTRIBUTING.md with dev setup and workflow
- Add GitHub issue templates (bug report, feature request)
- Add pull request template with checklist

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…throos#7)

- Fix BibTeX citation key (2025→2026) and add URL field
- Replace real client name "SQUAD" with "Acme" in examples
- Add Troubleshooting section (Docker, hooks, ingestion)
- Add Documentation section linking to docs/
- Link Contributing section to CONTRIBUTING.md
- Add Contributing to navigation bar

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Experiences allow the same memory to have different Q-values under
different domains (sales, coding, devops). This enables domain-specific
learning without losing cross-domain knowledge.
Key changes:
- Experience dataclass + YAML loading (search: user dir → bundled → default)
- QCache nested format: {mem_id: {experience: {q_data}}} with auto-migration
- compute_layer_rewards() shared helper (DRY: was duplicated in 3 files)
- 4 new MCP introspection tools (experience_info/top_memories/insights/calibrate)
- CLI --experience flag + experience list|show|stats subcommand
- Hooks propagate OPENEXP_EXPERIENCE env var
- Backward compatible: no env var = identical to current behavior
- 160/160 tests pass (23 new experience tests)
Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
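The nested Q-cache format and its flat→nested auto-migration could look roughly like this; the field keys ("q", "visits"), the "default" experience name, and the function name are hypothetical, chosen to mirror the bullets above:

```python
# Hypothetical illustration of the nested Q-cache format
# {mem_id: {experience: {q_data}}} and the legacy-format auto-migration.
def migrate_entry(entry: dict, default_exp: str = "default") -> dict:
    """Wrap a legacy flat q_data entry under a default experience key."""
    if "q" in entry:          # legacy flat format: q_data at the top level
        return {default_exp: entry}
    return entry              # already nested per-experience

legacy_cache = {"mem_1": {"q": 0.4, "visits": 3}}
nested = {mid: migrate_entry(e) for mid, e in legacy_cache.items()}
# nested["mem_1"] now holds {"default": {"q": 0.4, "visits": 3}}
```

With this shape, the same memory can carry a high Q under "sales" and a near-zero Q under "coding", which is exactly the domain-specific learning the commit describes.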
Add dealflow experience for deal pipeline workflows (lead → payment) with 5 new reward signals: proposal_sent, invoice_sent, call_scheduled, nda_exchanged, payment_received. Weights derived from real CRM data. Add comprehensive experiences guide (docs/experiences.md) with: - Full signal tables for all 3 shipped experiences - Step-by-step creation guide with questionnaire - Rating-to-weight conversion table - DevOps, Content, Researcher example profiles Update README, configuration, and how-it-works docs. Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Adds CLI wizard that walks users through creating a custom experience: - Rate 13 signals on 0-10 scale (auto-converts to weights) - Choose penalty strictness (lenient/moderate/strict) - Choose learning speed (fast/normal/slow → alpha) - Configure retrieval boosts per memory type - Optional CRM outcome resolver - Shows summary with total weight validation - Saves YAML to ~/.openexp/experiences/ Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add detection for telegram_sent, slack_sent, pr_merged, ticket_closed, review_approved, and release signals. Update CLI wizard and docs. Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add project-level experience override: place .openexp.yaml with `experience: <name>` in project root. Resolution priority: project .openexp.yaml → OPENEXP_EXPERIENCE env var → default. Update session-start and session-end hooks to check for project config. Add resolve_experience_name() to experience.py. Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
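The resolution priority above can be sketched as follows; resolve_experience_name() exists per the commit message, but this body is an illustrative guess at the described behavior, not the shipped code:

```python
# Illustrative priority: project .openexp.yaml -> OPENEXP_EXPERIENCE env var -> "default".
import os
from typing import Optional

def resolve_experience_name(project_cfg: Optional[dict]) -> str:
    """Pick the experience name, project-level config winning over the env var."""
    if project_cfg and project_cfg.get("experience"):
        return project_cfg["experience"]      # project-level override wins
    return os.environ.get("OPENEXP_EXPERIENCE", "default")
```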
Add memory compaction that clusters similar memories and merges them into compressed entries with Q-value weighted centroids.

Core algorithm:
- Greedy centroid clustering by cosine similarity
- Q-merged = Σ(q_i × sim_i) / Σ(sim_i) per layer per experience
- κ (stiffness) = 1/variance(rewards) — compression readiness signal
- Originals marked as "merged" via lifecycle transitions
- Merged memory gets "confirmed" status and inherits best metadata

New files:
- openexp/core/compaction.py — clustering, merging, Q computation
- tests/test_compaction.py — 16 tests for all core functions

CLI: `openexp compact --dry-run [--max-distance 0.25] [--project X]`

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
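The two formulas in the commit message can be written out directly. This is a hedged sketch with illustrative names, not the contents of compaction.py:

```python
# Q_merged = sum(q_i * sim_i) / sum(sim_i); stiffness kappa = 1 / variance(rewards).
from statistics import pvariance

def merged_q(qs: list[float], sims: list[float]) -> float:
    """Similarity-weighted average of per-memory Q-values in a cluster."""
    return sum(q * s for q, s in zip(qs, sims)) / sum(sims)

def stiffness(rewards: list[float]) -> float:
    """kappa = 1/variance: low reward variance means the cluster is stable
    and ready for compression."""
    var = pvariance(rewards)
    return float("inf") if var == 0 else 1.0 / var
```

High-similarity members dominate the merged Q, so an outlier weakly attached to the cluster barely shifts the compressed entry's value.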
Add reward_log.jsonl as append-only cold storage for complete reward event context. Each reward event gets a unique reward_id (rwd_<8hex>) that links L2 summary → L3 full record.

Three levels of Q-value explainability:
- L1: Q-value scalar (instant ranking)
- L2: reward_contexts with [rwd_...] pointers (quick inspection)
- L3: cold storage with full observations, predictions, breakdowns

All three reward paths (session, prediction, business) now generate reward_ids and write to cold storage. New MCP tool reward_detail for on-demand L3 access. calibrate_experience_q also writes L3.

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add human-readable explanations for Q-value changes across all 4 reward paths (session, prediction, business, calibration). Each reward event now generates an explanation via Claude that answers "why did this memory's Q-value change?"

- New module: openexp/core/explanation.py (generate + fetch)
- explain_q MCP tool aggregates all L4 explanations for a memory
- All reward paths now produce L4 explanations in cold storage
- 50+ new tests for explanation and reward context

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add interactive visualization capabilities:

- openexp/viz.py: data export for dashboards and session replay
- openexp/static/replay.html: self-contained session replay viewer
- openexp/static/viz.html: memory dashboard template
- CLI: `openexp viz --replay latest` and `openexp viz --demo`
- 30+ tests for visualization module

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- docs/storage-system.md: comprehensive reference for L0-L4 storage, all 4 reward paths, Q-learning formulas, 16 MCP tools
- docs/product-page-content.md: marketing copy for product page
- Updated CLAUDE.md and architecture.md to reference new modules
- Config: added explanation-related environment variables

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- GitHub Actions: test on Python 3.11/3.12/3.13 with Qdrant service
- README: 16 MCP tools (was 8), updated architecture, CLI commands, new docs links, CI badge
- Contributing: updated focus areas

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…iltering

- hooks: pass $CWD via env var instead of string interpolation to prevent command injection through crafted directory names (session-start.sh, session-end.sh)
- experience.py: validate experience names with ^[a-zA-Z0-9_-]+$ regex to prevent path traversal via malicious .openexp.yaml
- filters.py: add secret pattern detection (API keys, AWS keys, private keys) to prevent accidental ingestion of credentials into Qdrant
- .env.example: stronger recommendation to set QDRANT_API_KEY

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
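A secret-pattern filter of the kind described for filters.py might look like this; the concrete regexes are assumptions for illustration, not the shipped patterns:

```python
# Illustrative credential filter: block text matching common secret shapes
# before it is embedded and ingested into Qdrant.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key ID
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private key header
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),         # generic api_key assignment
]

def contains_secret(text: str) -> bool:
    """Return True if any known secret pattern appears in the text."""
    return any(p.search(text) for p in SECRET_PATTERNS)
```

A filter like this runs at ingestion time, so a leaked key never reaches the vector store in the first place rather than needing redaction after the fact.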
Two critical bugs in the Q-learning reward loop:

1. Session reward was computed from ALL observations across ALL sessions (showing "67 commits, 154 PRs" = lifetime cumulative stats). Now filters to only observations matching the current session_id.
2. Reward was applied to ALL newly ingested memories (2,721 at once) instead of only the 5-10 memories recalled at session start. Now uses reward_retrieved_memories() exclusively — the correct closed-loop path.

These bugs made Q-values meaningless (99.8% at identical Q=0.12). Q-cache has been reset to allow clean re-learning.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Observations are now automatically matched against CRM company names during ingestion. This enables the CRM resolver to reward memories when deals progress (e.g., invoiced→paid), closing the business outcome feedback loop. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Major features:
- Decision extraction from session transcripts using Opus 4.6 via claude -p
- Experience auto-detect from prompt keywords (sales, coding, etc.)
- Per-experience Q-value routing in observation ingest
- Q-value wiring fix + cache locking for concurrent sessions
New files:
- openexp/ingest/extract_decisions.py — Opus 4.6 extracts decisions, not actions
- openexp/core/experience.py — experience auto-detection + session tracking
- openexp/data/experiences/{sales,dealflow}.yaml — shipped experience configs
- docs/decision-extraction.md — full reference for extraction system
- tests/test_experience.py — 76 new tests
Documentation:
- 4-phase learning cycle in how-it-works.md
- Updated architecture, storage system, configuration docs
- Decision extraction env vars documented
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Added `pip install -e .` to workflow. Without it, pytest fails with ModuleNotFoundError on all 12 test files. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
import anthropic was running before the _anthropic_client mock check, failing in CI where anthropic package is not installed (commented out in requirements.txt). Now import only runs when client is None, so mocked client is used correctly. 263/263 tests pass. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix shell injection in session-end.sh: pass all variables via env vars instead of interpolating into Python string literals
- Remove hardcoded path ~/.claude/projects/-Users-ivanpasichnyk, use dynamic project directory discovery
- Remove personal info from extraction prompt
- Fix file descriptor leak in QCache save/load_and_merge (use with statement)
- Fix unbound merged_any variable in load_and_merge
- Reuse Qdrant singleton in extract_decisions instead of creating new client
- Remove unused EXTRACT_MAX_TOKENS variable
- Fix type hint inconsistency (QCache | None → Optional[QCache])

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
# Conflicts:
#   docs/decision-extraction.md
#   openexp/core/q_value.py
#   openexp/hooks/session-end.sh
#   openexp/ingest/extract_decisions.py
Protected memories never receive negative Q-value rewards — their score can only go up. Use for identity, core decisions, safety rules, and critical knowledge that should never decay.

- Add `protected` flag to Q-cache data
- QValueUpdater.update() and update_all_layers() skip negative rewards for protected memories (still log visits and context)
- New MCP tool `protect_memory` to protect/unprotect memories
- Show protection status in memory_reward_history
- 4 new tests for protection logic

Inspired by "LLM Living Memory" architecture (protection classes concept).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
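The protection rule reduces to a small guard in the update path. This sketch uses hypothetical entry fields and a simplified additive update, not the actual QValueUpdater code:

```python
# Hedged sketch: protected memories skip negative rewards but still log the visit.
def apply_reward(entry: dict, reward: float, alpha: float = 0.1) -> dict:
    entry["visits"] = entry.get("visits", 0) + 1      # visits always logged
    if entry.get("protected") and reward < 0:
        return entry                                   # score can only go up
    entry["q"] = max(-1.0, min(1.0, entry.get("q", 0.0) + alpha * reward))
    return entry
```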
5th reward path: LLM-based re-evaluation of Q-values across time windows. Session rewards only see one session — retrospectives see the full picture, catching cross-session attribution, false progress, and delayed outcomes.

- retrospective.py: core engine (gather data, LLM analysis via claude -p, apply adjustments)
- retrospective_prompts.py: daily/weekly/monthly prompt templates
- q_value.py: add set_q_value() for direct Q-value override
- explanation.py: L4 explanation blocks for retrospective reward types
- cli.py: `openexp retrospective daily|weekly|monthly [--dry-run]`
- 24 new tests (291 total passing)

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
When openexp modules are imported, ANTHROPIC_API_KEY gets set via .env/config. claude -p then uses API credits (empty) instead of Max subscription, causing exit=1 with "Credit balance is too low". Stripping the key forces Max mode. Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* docs: full reward system audit — 5 paths verified against code

  Added reward-audit-2026-04-08.md with complete code audit of all 5 reward paths (session, prediction, business, calibration, retrospective). Every claim verified with file:line references. Key findings: session rewards too small to influence ranking, prediction path unused (0 real predictions), retrospective 95% wasted on test fixtures leaked into Q-cache, calibration race condition (save_delta vs save). Updated storage-system.md to document Path 5 (retrospective).

* fix: remove session reward heuristic (Path 1)

  Session reward scored sessions by tool calls (commits +0.3, PRs +0.2) but didn't reflect real session value. Max Q-value produced was 0.031. Ivan's decision: Q-learning should rely on outcome-based rewards only (prediction, CRM business, calibration, retrospective).

* fix: add Qdrant existence check in retrospective apply_adjustments

  Previously, retrospective only validated memory_id against Q-cache. Test fixture IDs (mem-0001 etc.) passed validation and received 95% of all retrospective rewards. Now validates against Qdrant with graceful fallback if Qdrant is unavailable.

* fix: persist calibration Q-values immediately via save_delta

  Calibration previously relied on an atexit handler to persist changes. If retrospective ran save() between calibration and process exit, calibration values were overwritten. Now save_delta() runs immediately after each calibration, ensuring the delta file exists before any concurrent writer can overwrite it.

* docs: add prediction logging instructions to CLAUDE.md

  Path 2 (prediction → outcome) had 0 real predictions because Claude was never told to use log_prediction/log_outcome tools. Added instructions to the Memory Protocol section.

* docs: update README to reflect outcome-based reward system

  Removed references to session reward heuristic (Path 1, removed). Updated learning loop diagram to show 5 outcome-based reward paths: prediction, CRM business, calibration, retrospective, and decision extraction. Updated architecture section and data flow diagram.

* refactor: replace observation pipeline with full transcript ingest

  Kill the PostToolUse hook and observation pipeline that stored useless "Edited arena.py" entries. Replace with transcript.py that parses Claude Code JSONL transcripts and stores every user/assistant message in Qdrant. Add v2 backlog tracker.

  - Delete: observation.py, filters.py, reward.py, session_summary.py, post-tool-use.sh and their tests
  - Add: ingest/transcript.py (parse + embed + batch upsert)
  - Wire transcript ingest into session-end.sh (Phase 2d)
  - Rewrite cli.py cmd_ingest to use transcript pipeline
  - Clean ingest/__init__.py (remove dead ingest_session)
  - Add backlog.yaml + backlog_cli.py for v2 project tracking

  256 tests pass, 0 fail.

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Add _session_already_ingested() check before ingesting
- Skip sessions already in Qdrant (count filter on session_id + source)
- Add force=True param to bypass check when needed
- Add 22 tests covering parse_transcript, idempotency, batch upsert, payload structure, edge cases (empty, long, system-reminders, etc.)

278 tests pass total.

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…#30)

- --all: scan all project dirs, not just main
- --force: re-ingest even if session already stored
- Progress indicator: [N/M] session_id... during bulk ingest
- Track skipped count for already-ingested sessions

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
New weights: semantic 50%, keyword 15%, recency 20%, importance 15%, Q 0%. Q-value weight stays at 0 until the reward loop (Stage 4) proves it works. Add tests for weights-sum-to-1 and Q-weight-is-zero invariants. Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
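The rebalanced weights imply a simple weighted sum over normalized per-signal scores. A sketch with illustrative names; the actual scoring function in openexp may be structured differently:

```python
# Weight names follow the commit message; q stays at 0.0 until Stage 4.
WEIGHTS = {"semantic": 0.50, "keyword": 0.15, "recency": 0.20,
           "importance": 0.15, "q": 0.00}

def hybrid_score(signals: dict) -> float:
    """Weighted sum of per-signal scores, each assumed normalized to [0, 1]."""
    return sum(w * signals.get(name, 0.0) for name, w in WEIGHTS.items())

assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9   # weights-sum-to-1 invariant
assert WEIGHTS["q"] == 0.0                       # Q-weight-is-zero invariant
```

With the Q weight at zero, Q-values are still tracked and updated but cannot influence ranking until the reward loop is validated.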
Add role, session_id, source, date_from, date_to filters to search_memories() and expose them via MCP search_memory tool. Enables filtering by user/assistant role, specific sessions, and date ranges using Qdrant payload filters. Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
SessionEnd: remove dead session summary generation (observations gone), remove complex experience auto-detection, streamline to 2 steps: extract decisions + ingest transcript. 251 → 107 lines. SessionStart: remove dead session summary parsing for query building, simplify query construction. 126 → 96 lines. Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* feat: Experience Library pipeline — chunks, topics, threads, experience labels

  Add complete pipeline for extracting structured experience from conversation data:
  - Chunking: group Qdrant transcripts into ~200K token chunks (18 chunks from 26K points)
  - Topic mapping: per-chunk topic extraction via LLM (170 topics)
  - Thread grouping: cross-chunk topic merge into work threads (36 threads)
  - Experience extraction: Opus labels each thread with context→actions→outcome triplets
  - Batch labeling: script to process all threads (269 experience labels produced)
  - add_experience() in direct_search: store labels in Qdrant with search-optimized embedding
  - Reduce MCP tools from 16 to 5 (search_memory, add_memory, log_prediction, log_outcome, memory_stats)
  - Enable Q-value weight in scoring (0% → 10%)
  - CLI commands: openexp chunk, openexp topics

* docs: update README, CLAUDE.md, backlog for Experience Library

  Update architecture docs to reflect current state: 5 MCP tools (not 16), 300 tests, Experience Library pipeline. Add full experience-library.md documentation. Update backlog with Stage 5 (done) and Stage 6 (next).

* chore: gitignore generated HTML files

* fix: address security and code review findings

  Security:
  - Remove personal name from topic extraction prompt
  - Validate date format before Qdrant Range filter
  - Sanitize error messages in memory_stats (no connection details)
  - Add missing field validation in log_outcome MCP tool
  - Add format: date to MCP input schema

  Code quality:
  - Catch subprocess.TimeoutExpired in _call_opus (both files)
  - Guard threads.json read in batch_label.py
  - Fix keyword threshold: >2 chars (catches CRM, bot, MCP), adaptive min_matches
  - Fix fragile identity check in batch_label assistant-response inclusion
  - Replace full-scroll session counting with experience_library count
  - Fix nondeterministic ingest dir (sort by file count)
  - Fix KeyError crash in CLI topic status display
  - Fix duplicate status key in backlog.yaml
  - Avoid 800K string allocation in _estimate_tokens
  - Add comment explaining payload field duplication intent
  - Use -latest model IDs instead of date-versioned snapshots

* fix: use exact=True for Qdrant count in memory_stats

  exact=False returns approximate counts that were identical (13,381) for all source types. exact=True returns real counts.

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Rename IVAN: role label to USER: across all files (6 files)
- Remove real client names (SQUAD, Scople, МПУВ, Igor Bespalov) from prompts, README examples, tests, backlog
- Replace with generic placeholders (Acme Corp, Widget Co, etc.)
- Remove docs/outreach-pitch.md (personal marketing template)
- Anonymize reward audit doc (paths, client names, quotes)
- Replace Ukrainian text in storage-system.md with English
- Remove personal email from pyproject.toml
- Keep: LICENSE copyright, arXiv citation (publicly known)

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…ault

The official Qdrant image runs as root (UID 0) by default. The --user 1000:1000 flag was causing permission issues with the named volume storage because:
- The container's /qdrant/storage is owned by root
- Forcing UID 1000 denied write access to the storage directory

Running without --user allows the container to use its default root user, which has proper permissions for the named volume.
Add fallback `|| echo "not_found"` to collection existence check. Without this, the script would silently exit with code 22 when curl fails (e.g., 404 Not Found) or jq encounters null/invalid input. The || fallback ensures COLLECTION_EXISTS is always set to a valid string, allowing the conditional logic to proceed correctly.
Fix two jq-related errors during Claude Code hooks registration:

1. Null containment error: Change `.command | contains()` to `(.command // "") | contains()` to handle hook items without a command field.
2. Array iteration error: Change `any(.[]; ...)` to `any(.hooks.SessionStart[]; ...)` and use `+=` instead of `. + [...]` to properly append to hook arrays.

Fixes: 'null cannot have containment checked' and 'object and array cannot be added' errors.
Hey @alpiua — first off, thank you for taking the time to dig into setup.sh. Apologies for the long silence — the PR fell through the cracks for two weeks, that's on me. I cherry-picked your three commits onto current main. If you've found other rough edges, please open issues or PRs — I'll be more responsive going forward. And if you want to be credited differently in the merge commit (different name/email), let me know and I'll patch the trailer.
This PR fixes multiple jq and Docker issues that caused setup to fail.

Fixes

1. jq null containment error
   Error: `null and string cannot have their containment checked`
   Fix: Use `(.command // "") | contains("openexp")` to handle null command fields.
2. jq array iteration error
   Error: `object and array cannot be added`
   Fix: Use explicit array paths `any(.hooks.SessionStart[]; ...)` and the `+=` operator.
3. Qdrant container fails
   Error: Permission denied on storage
   Fix: Remove the `--user 1000:1000` flag – Qdrant runs as root (UID 0) by default.
4. Silent exit on collection check
   Error: Exit code 22 with no message
   Fix: Add a `|| echo "not_found"` fallback when curl 404s.

Setup now completes all 7 steps successfully.